
    Relevancy in Problem Solving: A Computational Framework

    When computer scientists discuss the computational complexity of, for example, finding the shortest path from building A to building B in some town or city, their starting point is typically a formal description of the problem at hand, e.g., a graph with weights on every edge, where buildings correspond to vertices, routes between buildings to edges, and route distances to edge weights. Given such a formal description, either tractability or intractability of the problem is established by proving that the problem either admits a polynomial-time algorithm or is NP-hard. However, this problem description is in fact an abstraction of the actual problem of being in A and desiring to go to B: it focuses on the relevant aspects of the problem (e.g., distances between landmarks and crossings) and leaves out a lot of irrelevant details. This abstraction step is often overlooked, but may well contribute to the overall complexity of solving the problem at hand. For example, “going from A to B” appears rather easy to abstract: it is fairly clear that the distance between A and the next crossing is relevant, and that the color of the roof of B typically is not. However, when the problem to be solved is “make X love me”, where the current state is (assumed to be) “X doesn’t love me”, it is hard to agree on all the relevant aspects of the problem. In this paper a computational framework is presented in order to formally investigate the notion of relevance in finding a suitable problem representation. It is shown that finding a minimal relevant subset of all the problem dimensions that might or might not be relevant is itself intractable in general. Starting from a computational complexity stance, this paper aims to contribute a computational framework of ‘relevancy’ in problem solving, so as to separate ‘easy to abstract’ from ‘hard to abstract’ problems. This framework is then used to discuss results in the literature on representation, (insight) problem solving, and individual differences in the abstraction task, e.g., when experts in a particular domain are compared with novice problem solvers.
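    As a minimal illustration of the formalization step described in this abstract (an assumed toy example, not the paper's framework), the Python sketch below encodes a town as a weighted graph and finds the shortest route with Dijkstra's algorithm; the vertex names and weights are hypothetical.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path in a weighted graph; graph maps vertex -> {neighbor: weight}."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(queue, (nd, v))
    # Reconstruct the route from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Buildings and crossings as vertices, routes as edges, route distances as weights.
town = {
    "A": {"crossing1": 2, "crossing2": 3},
    "crossing1": {"B": 4},
    "crossing2": {"B": 1},
    "B": {},
}
print(dijkstra(town, "A", "B"))  # (['A', 'crossing2', 'B'], 4)
```

    Everything the abstraction left out (roof colors, and so on) simply never appears in this representation; the hard part discussed in the paper is deciding which dimensions to keep in the first place.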

    Intentional Communication: Computationally Easy or Difficult?

    Human intentional communication is marked by its flexibility and context sensitivity. Hypothesized brain mechanisms can provide convincing and complete explanations of the human capacity for intentional communication only insofar as they can match the computational power required for displaying that capacity. It is thus of importance for cognitive neuroscience to know how computationally complex intentional communication actually is. Though the subject of considerable debate, the computational complexity of communication remains so far unknown. In this paper we defend the position that the computational complexity of communication is not a constant, as some views of communication seem to hold, but rather a function of situational factors. We present a methodology for studying and characterizing the computational complexity of communication under different situational constraints. We illustrate our methodology for a model of the problems solved by receivers and senders during a communicative exchange. This approach opens the way to a principled identification of putative model parameters that control the cognitive processes supporting intentional communication.
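    The following hypothetical toy sketch (not the authors' model) only illustrates the claim that the cost of a communicative computation can scale with situational factors: a sender searches exhaustively over candidate signals while anticipating the receiver's interpretation, so the work grows with the sizes of the signal and meaning repertoires, which are situational parameters rather than constants.

```python
# Hypothetical toy sender problem (illustrative only): pick a signal whose best
# receiver interpretation matches the intended meaning. Exhaustive search costs
# on the order of |signals| * |meanings| evaluations, a quantity set by the situation.
def choose_signal(intended, signals, meanings, fit):
    for s in signals:
        # Anticipate the receiver: which meaning best fits signal s?
        receiver_pick = max(meanings, key=lambda m: fit(s, m))
        if receiver_pick == intended:
            return s
    return None  # no signal reliably conveys the intended meaning

# Toy instance with a hand-made fit function (purely illustrative values).
signals = ["point", "nod", "wave"]
meanings = ["look there", "yes", "hello"]
fit_table = {("point", "look there"): 0.9, ("nod", "yes"): 0.8, ("wave", "hello"): 0.7}
fit = lambda s, m: fit_table.get((s, m), 0.1)
print(choose_signal("yes", signals, meanings, fit))  # nod
```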

    Ignorance is Bliss: A Complexity Perspective on Adapting Reactive Architectures

    We study the computational complexity of adapting a reactive architecture to meet task constraints. This computational problem has applications in a wide variety of fields, including cognitive and evolutionary robotics and cognitive neuroscience. We show that, even for a rather simple world and a simple task, adapting a reactive architecture to perform a given task in the given world is NP-hard. This result implies that adapting reactive architectures is computationally intractable regardless of the nature of the adaptation process (e.g., engineering, development, evolution, learning, etc.), unless very special conditions apply. In order to find such special conditions for tractability, we have performed parameterized complexity analyses. One of our main findings is that architectures with limited sensory and perceptual abilities are efficiently adaptable.
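    The sketch below (an assumed toy formalization, not the paper's formal model) treats a reactive architecture as a fixed mapping from sensory states to actions and adapts it by exhaustive search; the number of candidate mappings is the number of actions raised to the number of sensory states, which is one intuition for why a limited sensory repertoire keeps adaptation feasible.

```python
from itertools import product

# Illustrative sketch only: a reactive architecture as a sensory-state -> action map.
# Exhaustive adaptation checks |actions| ** |sensory_states| candidate mappings,
# which explodes unless the sensory repertoire is small.
def adapt_reactive(sensory_states, actions, meets_task):
    """Return a mapping that satisfies the task constraints, if any exists."""
    for choice in product(actions, repeat=len(sensory_states)):
        policy = dict(zip(sensory_states, choice))
        if meets_task(policy):
            return policy
    return None

# Toy task: in a two-cell world, move toward the cell containing food.
states = ["food_left", "food_right"]
actions = ["go_left", "go_right"]
ok = lambda p: p["food_left"] == "go_left" and p["food_right"] == "go_right"
print(adapt_reactive(states, actions, ok))
# {'food_left': 'go_left', 'food_right': 'go_right'}
```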

    Parameterized Completeness Results for Bayesian Inference

    We present completeness results for inference in Bayesian networks with respect to two different parameterizations, namely the number of variables and the topological vertex separation number. To this end we introduce the parameterized complexity classes W[1]PP and XLPP, which relate to W[1] and XNLP, respectively, as PP does to NP. The second parameter is intended as a natural translation of the notion of pathwidth to directed acyclic graphs, and as such it is a stronger parameter than the more commonly considered treewidth. Based on a recent conjecture, the completeness results for this parameter suggest that deterministic algorithms for inference require exponential space in terms of pathwidth, and by extension treewidth. These results are intended to contribute towards a more precise understanding of the parameterized complexity of Bayesian inference and thus of its required computational resources in terms of both time and space.
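    As a minimal sketch of the problem being parameterized (a toy network and a naive algorithm, not the paper's construction), the code below performs exact inference in a small Bayesian network by brute-force enumeration, whose cost is exponential in the number of variables; structured algorithms instead pay for the width of an elimination or path decomposition, which is what treewidth- and pathwidth-style parameters measure.

```python
from itertools import product

def enumerate_joint(cpts, order):
    """Yield (assignment, probability) for every joint assignment of binary variables."""
    for values in product([0, 1], repeat=len(order)):
        assignment = dict(zip(order, values))
        p = 1.0
        for var, (parents, table) in cpts.items():
            key = tuple(assignment[q] for q in parents)
            p_true = table[key]
            p *= p_true if assignment[var] == 1 else 1.0 - p_true
        yield assignment, p

def query(cpts, order, target, evidence):
    """P(target = 1 | evidence) by summing over the full joint distribution."""
    num = den = 0.0
    for a, p in enumerate_joint(cpts, order):
        if all(a[v] == val for v, val in evidence.items()):
            den += p
            if a[target] == 1:
                num += p
    return num / den

# Toy network: Rain -> Sprinkler, and Rain, Sprinkler -> WetGrass (illustrative CPTs).
cpts = {
    "Rain": ((), {(): 0.2}),
    "Sprinkler": (("Rain",), {(0,): 0.4, (1,): 0.01}),
    "WetGrass": (("Rain", "Sprinkler"),
                 {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99}),
}
print(query(cpts, ["Rain", "Sprinkler", "WetGrass"], "Rain", {"WetGrass": 1}))
```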

    Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics

    The massive use of artificial neural networks (ANNs), increasingly popular in many areas of scientific computing, rapidly increases the energy consumption of modern high-performance computing systems. An appealing and possibly more sustainable alternative is provided by novel neuromorphic paradigms, which directly implement ANNs in hardware. However, little is known about the actual benefits of running ANNs on neuromorphic hardware for use cases in scientific computing. Here we present a methodology for measuring the energy cost and compute time for inference tasks with ANNs on conventional hardware. In addition, we have designed an architecture for these tasks and estimate the same metrics based on a state-of-the-art analog in-memory computing (AIMC) platform, one of the key paradigms in neuromorphic computing. Both methodologies are compared for a use case in quantum many-body physics in two-dimensional condensed matter systems and for anomaly detection at 40 MHz rates at the Large Hadron Collider in particle physics. We find that AIMC can achieve up to one order of magnitude shorter computation times than conventional hardware, at an energy cost that is up to three orders of magnitude smaller. This suggests great potential for faster and more sustainable scientific computing with neuromorphic hardware.
    Comment: 7 pages, 4 figures, submitted to APL Machine Learning
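    A minimal sketch of the latency side of such a benchmark on conventional hardware (an assumed setup, not the paper's measurement pipeline): time repeated forward passes of a small dense network and report the mean per-batch latency. Energy measurement would additionally require hardware power counters, which are omitted here.

```python
import time
import numpy as np

# Assumed toy network: one hidden ReLU layer, random weights, batch of 64 inputs.
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((256, 128)), rng.standard_normal((128, 10))

def forward(x):
    return np.maximum(x @ W1, 0.0) @ W2

x = rng.standard_normal((64, 256))
forward(x)  # warm-up run so one-time costs do not skew the timing

n_runs = 1000
start = time.perf_counter()
for _ in range(n_runs):
    forward(x)
elapsed = time.perf_counter() - start
print(f"mean inference latency: {elapsed / n_runs * 1e6:.1f} us per batch")
```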